Limitless AI - not the best idea.

AI (Artificial Intelligence) nowadays seems to be the perfect answer to some of the big problems in computing: communication between humans and computers, constructing things, and so on.

There are several degrees of AI.

The lowest is a "simple AI", which understands spoken orders and executes programs. We are very close to achieving this kind of AI.

The next level would be a "real AI", which has some kind of intelligence of its own. We are far away from achieving this form of intelligence, apart from a few isolated aspects of it.

This "real AI" has the ability to self program, some range of independence from Orders, to be able to find own ways to execute the orders.

"Real AI" is not a single level, there are many degrees of it.

Compared to animals, there could be:

an ant

self-moving, self-feeding, with a low capacity for finding new solutions to problems.

a dog

the same as an ant, but with a certain ability to find solutions to problems, a certain tendency to create new problems, and some independence from orders.

a cat

the same as a dog, but with more new problems and a greater independence from orders. (If you happen to live with a cat, you know what this means.)

You can't really pin down the point beyond which an animal has this "real intelligence". It is even difficult to define. The same goes for personality (which cats and dogs have). And there is no proof of whether intelligence and personality depend on each other or not.

If you compare the intelligence of an animal with that of a human being, you will find some points that match and some that don't. But animals are far more similar to humans than any computer ever will be - we share the need to breathe, drink, eat, reproduce and so on with every animal. So we can most certainly only guess what kind of intelligence an AI would have, because it is NOT driven by any instincts. (?)

The more this "real AI" develops (from some point on, by itself?), the greater these differences may become.

SIMPLE AI

The idea that you simply talk, the computer realizes what you meant and instantly reacts, is a catchy one.

As with almost every idea, there is a "but". To be honest, there are several "but"s.

One aspect is the problem the AI has in understanding what the intention 'behind the words' may be. And this is a fundamental problem:

Computer programming means writing program code which the machine can understand and then execute. All programming languages are extremely reduced (compared to any human language) and absolutely exact. In the following, I'm referring to so-called "higher-level" languages, which are translated into machine language for one specific computer in order to be executed.

One word in such a programming language is, for example, "echo" (PHP, shell scripts, etc.)

echo "hello world!"

is one of the most common first examples for beginning programmers.

It means that the computer should write

hello world!

on the screen. (And not write the quotation marks, and do nothing else!)

Whenever the computer "reads" the word 'echo' (i.e. its machine-language equivalent), it knows exactly what to do. It will write to the screen every time and do nothing else every time (if it's not broken).
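
To see this determinism in a couple more lines of code (a Python analogue, since Python's print plays the same role here as echo):

# Python analogue of the 'echo' example: one statement, exactly one meaning.
# Every execution produces exactly the same output, nothing more.
for _ in range(3):
    print("hello world!")   # always writes this text to the screen, and nothing else

No run, on no machine, will ever produce anything else from these two lines.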

In human languages (English, German) the word 'echo' has several meanings:

  • Echo was an ancient Greek mountain nymph
  • echo is short for an echocardiogram (an ultrasound examination of the heart)
  • echo is English for replication
  • echo is English and German for the duplicated sound reflected back from some solid material (a mountain, a wall)
  • ...

If 'echo' is used in a sentence, a human almost instantly knows what the word "means". The astounding thing is that, at that moment, about 99 % are not even aware that other meanings exist while they use the word. And depending on their education, for some words they may not even know that other meanings exist at all.

This is the first level of difference between human and programming languages. These ambiguities can, with great effort, be foreseen to a certain degree and a solution programmed, as sketched below. But to recognize a "problem" of words with different meanings in the first place, the AI has to be extremely well educated; otherwise it cannot ask what is meant, because it is not aware of any conflict. ("Destroy this bond" could mean destroying a loan, a ribbon, or even 007.)
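
A minimal sketch in Python of what such a programmed solution could look like - the word senses and context keywords here are invented purely for illustration; a real system would need an enormous, curated inventory:

# Toy word-sense lookup: senses and keywords are invented for illustration.
SENSES = {
    "bond": {
        "loan":   {"bank", "loan", "interest", "money"},
        "ribbon": {"tie", "ribbon", "paper", "gift"},
        "agent":  {"james", "007", "spy"},
    },
}

def interpret(word, context_words):
    # Return the sense whose keywords overlap the context, or ask back.
    senses = SENSES.get(word, {})
    matches = [s for s, keys in senses.items() if keys & set(context_words)]
    if len(matches) == 1:
        return matches[0]        # unambiguous in this context
    if senses:
        return "ASK THE USER"    # aware of a conflict, so it can ask what is meant
    return "UNKNOWN WORD"        # no inventory: not even aware of any conflict

print(interpret("bond", ["destroy", "this"]))          # -> ASK THE USER
print(interpret("bond", ["destroy", "this", "007"]))   # -> agent

The crucial line is the last branch: without the hand-built inventory, the program cannot even notice that a conflict exists.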

So far this has led to a very restricted vocabulary that can be used. The fact that there are AIs which can "translate" spoken words into text does NOT mean that the AI understands what the text means.

Spoken commands cover only a small part of "real" human language - which is, for now, not what it should be (and in the future will be).

Try "Siri, before you play 'Smoke on the water' please open my mail and read any mail from John, not older then 3 weeks"

And enjoy!

(Who is not older than 3 weeks? John? The mail? WTH is the sense of that command anyway? Does it mean that Siri should rather read the mails first and, if there are some, afterwards NOT play or DO play 'Smoke on the water'?)
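
Written out as data (a toy sketch in Python; the structures are invented for illustration), this one sentence already has at least two structurally different readings that a parser must choose between:

reading_1 = {                       # 'not older than 3 weeks' modifies the mails
    "first": {"action": "read_mail", "sender": "John", "max_age_weeks": 3},
    "then":  {"action": "play", "song": "Smoke on the water"},
}
reading_2 = {                       # ...or, grammatically, it modifies John
    "first": {"action": "read_mail", "sender": "John (age < 3 weeks)"},
    "then":  {"action": "play", "song": "Smoke on the water"},
}
# A human discards reading_2 instantly (three-week-old babies don't send mail);
# an AI needs that piece of world knowledge, too.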

The second level of possible misunderstandings - which in my opinion cannot easily be solved - is the difference in meaning of a word when it is spoken in different ways. This is not limited to Chinese, where a different pitch or tone gives a word a different meaning.

A human can say "Life is sometimes so and sometimes so", which makes perfect sense when each 'so' is spoken with a different emphasis. It makes perfect nonsense to an AI that cannot estimate pitch and emphasis within a sentence - which would require something like musical skills from the AI.

The third level of misunderstandings is the unspoken content of an order.

Isaac Asimov (Boston University, biochemist, author of many fiction and non-fiction books) wrote many 'Robot' stories in which he describes the difficulties of interpretation between human intelligence and AI.

Suppose a computer doesn't understand a command and asks for more specific details, and the human says "Oh, forget the whole thing". What does that mean? Should (as in one of the stories) really the "whole thing" be forgotten? In my example, does it mean that Siri should forget to play 'Smoke on the water' and forget to read the mails?

And for a computer, does "forget" mean to erase completely? That would imply that the meanings of 'play' and 'read mail' should also be forgotten (certainly not).

So which part should be executed and to what 'degree'?
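
A sketch of how the scope of "forget" could be pinned down in code (Python; names and structure invented for illustration): cancelling clears the pending tasks, but must never erase the vocabulary that defines what 'play' or 'read mail' mean.

# Toy assistant state: "forget the whole thing" clears the task queue only.
lexicon = {"play": "start audio playback", "read_mail": "read messages aloud"}
pending = [("read_mail", "from John, < 3 weeks"), ("play", "Smoke on the water")]

def forget_the_whole_thing():
    pending.clear()   # drop the current request...
    # ...but leave the lexicon untouched: 'forget' must not mean 'erase completely'

forget_the_whole_thing()
print(pending)   # -> []
print(lexicon)   # the meanings of the commands survive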

Which leads to a fundamental problem - if "real AI" is meant in all its consequences: which part of a command is to be obeyed and which part is not? And what are the rules for not obeying a command, or a part of it?

If "real AI" some time from now could mean, that the computer has some kind of personality, what kind of personality is meant. Is morality included? Or worse, is it excluded?

Where are the limits?

A more recent example of a "real AI" is the case of Facebook. A Facebook AI had to be shut down because it began to talk in a new language that humans could not understand. The AI was instructed to "talk" to another computer. Suddenly the two bots began to talk in an unintelligible language, because nobody had told them beforehand that they should stick to a "human-readable" language.

This example seems harmless. It was definitely not considered harmless by the technicians who 'pulled the plug'. And if one imagines AI machines in a dangerous context, it is not harmless at all. Think, for example, of a bomb-dropping machine controlled by an AI "supervisor", with both of them starting to talk 'gibberish'. (And yes, AI definitely will be used by the military, because it is enormously faster than any human can be. And every single invention has been used in war against humans.)

Now combine this with a "real AI" that lives in some kind of internet - distributed across several machines, so that there may be no quick and easy way to 'pull the plug' - and combine it further with the possibility of it not executing some parts of a command (or the whole command).

Another, more complex example is the self-driving car.

If a car with some AI realizes through its sensors that there is an immovable obstacle in front of it (a tree lying across its side of the road), that a mother with a little child is crossing from the other side although the signal tells her 'stop', and that there is no chance to brake early enough, the car will smash either into the tree or into the mother and child. How should it decide? Whom should it kill? Its own driver (worth more because it is the "own" one - why?)? The mother and child, because they are soft and the driver would not be harmed? The driver, because he is alone and the others are two? But she is walking where the signal says 'stop'. What if the driver has a Nobel Prize and the woman seems silly?

Which leads to the conclusion that there must be some rules which cannot be ignored by the AI.

Which leads back to Isaac Asimov, who invented such rules in his robot stories: the so-called "Three Laws of Robotics" (later there were four), which are an extremely simplified morality for AIs.
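
What 'rules which cannot be ignored' might look like at the code level - a minimal sketch in Python, with rules and actions invented for illustration; the essential design choice is that the checks sit below the layer the AI itself can reprogram:

# Sketch of an inviolable rule layer: every planned action must pass every
# hard rule before it runs. Rules and actions are invented for illustration.
HARD_RULES = (
    lambda action: not action.get("harms_human", False),      # never harm a human
    lambda action: not action.get("modifies_rules", False),   # never rewrite these rules
)

def execute(action):
    for rule in HARD_RULES:
        if not rule(action):
            raise PermissionError("refused: " + action["name"] + " violates a hard rule")
    print("executing " + action["name"])

execute({"name": "open_mail"})                          # -> executing open_mail
try:
    execute({"name": "drop_bomb", "harms_human": True})
except PermissionError as e:
    print(e)                                            # -> refused: drop_bomb ...

Retrofitting such a layer onto an AI that can already reprogram itself offers no such guarantee - which is exactly the point of the warning at the end of this article.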

Ulrike Barthelmess and Ulrich Furbach at the University of Koblenz in Germany have argued that there is no need for such rules, because that "fear is unfounded".

(More on this at https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/ where you can also find the laws themselves.)

I respectfully and completely disagree with that.

I wonder whether those scientists considered another point: it may not be the AI by itself that is dangerous, but the combination of AI with malicious actors - terrorists, crackers (the correct word for criminal hackers), and even secret services or governments that don't care about human rights - that has to be taken care of. And the simpler way would be to prevent the AI from even being able to harm mankind.

The fear is perhaps unfounded NOW, but we should all reckon with very fast progress.

AI projects not commonly known

There are some AI projects which are already working with "simple AI".

There is a very interesting open-source artificial intelligence project (https://mycroft.ai/), and things are possible NOW that nobody thought could be realized 5 years ago: giving a presentation where the spoken text is displayed in real time and the 'slides' are presented automatically (https://mycroft.ai/mycroft-new-presentation-feature/). I don't think the AI behind that really understands 100% of what the presentation is about, but the spelling is correct. And I am not 100% sure whether this is true ;-)

Warning

If we don't implement such rules NOW, there will be no time to implement them later - simply because if we implement them later, they will not and cannot be the BASE of all reactions of an AI, and all the work done until then could therefore be lost.

If we don't even think about how to implement these rules, we will definitely never find a way.

 
